We present a method for synthesizing cardiac MR images with plausible heart shapes and realistic appearance, with the goal of generating labeled data for deep learning (DL) training. It decomposes image synthesis into label deformation and label-to-image translation tasks. The former is achieved through latent-space interpolation in a VAE model, while the latter is accomplished by a conditional GAN model. We devise a label-manipulation approach in the latent space of a trained VAE model, namely pathology synthesis, aiming to synthesize a series of pseudo-pathological synthetic subjects with the characteristics of a desired heart disease. Furthermore, we propose to model the relationship between 2D slices by estimating the correlation coefficient matrix between their latent vectors and utilizing it to correlate the elements of randomly drawn samples before decoding them to image space. This simple yet effective approach results in generating 3D-consistent subjects from 2D slices. Such an approach could provide a solution to diversify and enrich the available database of cardiac MR images and pave the way for the development of DL-based image analysis algorithms. The code will be available at https://github.com/sinaamirrajab/cardiacpathologysynthesis.
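The slice-correlation idea above can be sketched concretely: given an estimated correlation matrix between the latent vectors of the 2D slices, independently drawn Gaussian samples can be correlated through the matrix's Cholesky factor before being decoded. A minimal numpy sketch under illustrative assumptions (the correlation structure and dimensions below are toy values, and the trained VAE decoder is omitted):

```python
import numpy as np

def correlated_slice_latents(corr, latent_dim, rng=None):
    """Draw one latent vector per slice so that corresponding latent
    elements are correlated across slices according to `corr`.

    corr: (n_slices, n_slices) correlation coefficient matrix
    returns: (n_slices, latent_dim) array of correlated latent vectors
    """
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(corr)  # corr = L @ L.T
    # Independent N(0, 1) samples: one column per latent element.
    z = rng.standard_normal((corr.shape[0], latent_dim))
    # Mixing the slices through L imposes the desired cross-slice correlation.
    return L @ z

# Toy example: 4 slices whose correlation decays with slice distance.
idx = np.arange(4)
corr = 0.9 ** np.abs(idx[:, None] - idx[None, :])
z = correlated_slice_latents(corr, latent_dim=16, rng=0)
print(z.shape)  # (4, 16)
```

Each row would then be decoded by the trained VAE to obtain one slice, giving slices that vary smoothly through the volume instead of being mutually independent.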
There has been great interest in databases of virtual cardiac MR images, simulated based on MR physics, for developing deep learning analysis networks. However, the usage of such databases is limited, or shows suboptimal performance, due to the realism gap, missing textures, and the simplified appearance of simulated images. In this work we 1) provide simulations of different anatomies on virtual XCAT subjects, and 2) propose a sim2real translation network to improve image realism. Our usability experiments suggest that sim2real data has good potential to augment training data and boost the performance of a segmentation algorithm.
Vision transformers have emerged as powerful tools for many computer vision tasks. It has been shown that their features and class tokens can be used for salient object segmentation. However, the properties of segmentation transformers remain largely unstudied. In this work we conduct an in-depth study of the spatial attentions of different backbone layers of semantic segmentation transformers and uncover interesting properties. The spatial attentions of a patch intersecting with an object tend to concentrate within the object, whereas the attentions of larger, more uniform image areas instead follow a diffusive behavior. In other words, vision transformers trained to segment a fixed set of object classes generalize to objects well beyond this set. We exploit this by extracting heatmaps that can be used to segment unknown objects within diverse backgrounds, such as obstacles in traffic scenes. Our method is training-free and its computational overhead is negligible. We use off-the-shelf transformers trained for street-scene segmentation to process other scene types.
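The heatmap extraction described above can be illustrated with a minimal sketch: take the spatial attention row of a seed patch and average it over heads to obtain a per-patch map. This is a generic illustration of reading off transformer attention, not the paper's exact procedure; the function name, seed-patch choice, and the random attention weights below are our assumptions:

```python
import numpy as np

def attention_heatmap(attn, seed_patch):
    """Aggregate transformer spatial attention into a per-patch heatmap.

    attn: (n_heads, n_patches, n_patches) attention weights of one layer
    seed_patch: index of a patch assumed to intersect the object
    returns: (n_patches,) heatmap in [0, 1], averaged over heads
    """
    # The seed patch's attention row tells how strongly it attends to
    # every other patch; concentration within the object shows up here.
    heat = attn[:, seed_patch, :].mean(axis=0)
    return heat / heat.max()  # normalize to [0, 1]

# Toy example with random (row-softmaxed) attention weights.
rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 196, 196))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
heat = attention_heatmap(attn, seed_patch=42)
print(heat.shape)  # (196,)
```

In practice the heatmap would be reshaped to the patch grid (e.g. 14x14 for 196 patches) and thresholded to obtain a segmentation mask for the unknown object.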
In this research, we present an agent-based model of human muscle that can be used in the analysis of human movement. As the model is designed based on the physiological structure of the muscle, the simulation calculations are natural, and it is also possible to analyze human movement using reverse-engineering methods. The model is also a suitable choice for use in modern prostheses, because its computational cost is lower than that of other machine learning models such as artificial neural network algorithms, which makes our algorithm battery-friendly. We also devise a method that can calculate the intensity of human muscle activity during the gait cycle using a reverse-engineering solution. The algorithm, called Boots, differs from some optimization methods in that it is able to compute the activities of both agonist and antagonist muscles in a joint. As a consequence, by combining the agent-based model of human muscle with the Boots algorithm, we would be capable of developing software that calculates the nervous stimulation of the human lower-body muscles based on angular displacement during the gait cycle, without using painful methods like electromyography. By developing the application as open-source software, we hope to help researchers and physicians studying in medical and biomechanical fields.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
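The patch-based strategy mentioned above, the most common workaround (69%) for samples too large to process at once, amounts to tiling a large volume into fixed-size patches before training. A minimal sketch of the idea (shapes, stride, and function name are illustrative assumptions, not from any surveyed solution):

```python
import numpy as np

def extract_patches(volume, patch, stride):
    """Tile a 3D volume into patches of shape `patch`, stepping by
    `stride` along each axis (partial patches at the edges are dropped)."""
    pz, py, px = patch
    sz, sy, sx = stride
    patches = []
    for z in range(0, volume.shape[0] - pz + 1, sz):
        for y in range(0, volume.shape[1] - py + 1, sy):
            for x in range(0, volume.shape[2] - px + 1, sx):
                patches.append(volume[z:z + pz, y:y + py, x:x + px])
    return np.stack(patches)

# A 64x128x128 volume tiled into non-overlapping 32x64x64 patches.
vol = np.zeros((64, 128, 128))
patches = extract_patches(vol, patch=(32, 64, 64), stride=(32, 64, 64))
print(patches.shape)  # (8, 32, 64, 64)
```

Choosing a stride smaller than the patch size yields overlapping patches, which is common when predictions are later blended back into a full-resolution output.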
We study critical systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing. These systems often support communities disproportionately affected by systemic racial, gender, or other injustices, so it is crucial to design these systems with fairness considerations in mind. To address this problem, we propose a framework for evaluating fairness in contextual resource allocation systems that is inspired by fairness metrics in machine learning. This framework can be applied to evaluate the fairness properties of a historical policy, as well as to impose constraints in the design of new (counterfactual) allocation policies. Our work culminates with a set of incompatibility results that investigate the interplay between the different fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. Our framework can help guide the discussion among stakeholders in deciding which fairness metrics to impose when allocating scarce resources.
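The flavor of result 2 above can be illustrated with a toy simulation: when a vulnerability score is distributed differently across two groups, a policy that serves the highest-scoring individuals first allocates the scarce resource at very different rates per group, which in turn drives unequal outcomes. All numbers and distributions below are our illustrative assumptions, not data or the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups whose vulnerability scores follow different distributions.
scores_a = rng.beta(2, 5, size=1000)  # group A: lower scores on average
scores_b = rng.beta(5, 2, size=1000)  # group B: higher scores on average

# Scarce resource: serve the 500 highest-scoring people overall.
scores = np.concatenate([scores_a, scores_b])
group = np.array([0] * 1000 + [1] * 1000)
served = np.argsort(scores)[-500:]

# Share of the served population coming from each group.
rate_a = (group[served] == 0).mean()
rate_b = (group[served] == 1).mean()
print(rate_a < rate_b)  # True: group B receives far more of the resource
```

Note that the score can be perfectly calibrated at the individual level and this disparity still arises, which is the tension the incompatibility results formalize.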
Different types of mental rotation tests have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image. We explore a controlled setting whereby questions are posed about the properties of a scene if that scene was observed from another viewpoint. To do this we have created a new version of the CLEVR dataset that we call CLEVR Mental Rotation Tests (CLEVR-MRT). Using CLEVR-MRT we examine standard methods, show how they fall short, then explore novel neural architectures that involve inferring volumetric representations of a scene. These volumes can be manipulated via camera-conditioned transformations to answer the question. We examine the efficacy of different model variants through rigorous ablations and demonstrate the efficacy of volumetric representations.
The Universal Feature Selection Tool (UniFeat) is an open-source tool developed entirely in Java for performing feature selection processes in various research areas. It provides a set of well-known and advanced feature selection methods, together with significant auxiliary tools that allow users to compare the performance of feature selection methods. Moreover, due to the open-source nature of UniFeat, researchers can use and modify it in their research, which facilitates the rapid development of new feature selection algorithms.
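UniFeat itself is a Java tool; as a language-agnostic illustration of the kind of comparison it enables, the sketch below ranks features by two classic filter criteria (variance and absolute Pearson correlation with the label) on toy data. The function names and data are ours, not UniFeat's API:

```python
import numpy as np

def rank_by_variance(X):
    """Rank feature indices by sample variance, highest first."""
    return np.argsort(X.var(axis=0))[::-1]

def rank_by_correlation(X, y):
    """Rank feature indices by |Pearson correlation| with the label."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])
    return np.argsort(corr)[::-1]

# Toy data: feature 0 is informative, features 1-2 are noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.column_stack([
    y + 0.1 * rng.standard_normal(200),  # informative, low variance
    rng.standard_normal(200),            # high-variance noise
    0.01 * rng.standard_normal(200),     # low-variance noise
])
print(rank_by_correlation(X, y)[0])  # 0: the informative feature ranks first
```

Note that the variance criterion would rank the noisy feature 1 first here, while the correlation criterion correctly prefers feature 0: exactly the kind of disagreement that makes side-by-side comparison of selection methods useful.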
With advanced imaging, sequencing, and profiling technologies, multiple omics data become increasingly available and hold promises for many healthcare applications such as cancer diagnosis and treatment. Multimodal learning for integrative multi-omics analysis can help researchers and practitioners gain deep insights into human diseases and improve clinical decisions. However, several challenges are hindering the development in this area, including the availability of easily accessible open-source tools. This survey aims to provide an up-to-date overview of the data challenges, fusion approaches, datasets, and software tools from several new perspectives. We identify and investigate various omics data challenges that can help us understand the field better. We categorize fusion approaches comprehensively to cover existing methods in this area. We collect existing open-source tools to facilitate their broader utilization and development. We explore a broad range of omics data modalities and a list of accessible datasets. Finally, we summarize future directions that can potentially address existing gaps and answer the pressing need to advance multimodal learning for multi-omics data analysis.
Optical coherence tomography (OCT) helps ophthalmologists assess macular edema, accumulation of fluids, and lesions at microscopic resolution. Quantification of retinal fluids is necessary for OCT-guided treatment management, which relies on a precise image segmentation step. As manual analysis of retinal fluids is a time-consuming, subjective, and error-prone task, there is increasing demand for fast and robust automatic solutions. In this study, a new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation. The model benefits from hierarchical representation learning of textural, contextual, and edge features using a new self-adaptive dual-attention (SDA) module, multiple self-adaptive attention-based skip connections (SASC), and a novel multi-scale deep self-supervision learning (DSL) scheme. The attention mechanism in the proposed SDA module enables the model to automatically extract deformation-aware representations at different levels, and the introduced SASC paths further account for spatial-channel interdependencies in the concatenation of encoder and decoder units, thereby improving representational capability. RetiFluidNet is also optimized using a joint loss function comprising a weighted version of Dice overlap and an edge-based connectivity loss, where several hierarchical stages of multi-scale local losses are integrated into the optimization process. The model is validated on three publicly available datasets, RETOUCH, OPTIMA, and DUKE, and compared against several baselines. Experimental results on the datasets demonstrate the effectiveness of the proposed model in retinal OCT segmentation and reveal that the suggested method is more effective than existing state-of-the-art fluid segmentation algorithms in adapting to retinal OCT scans recorded by various image scanning instruments.
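The weighted Dice component of the joint loss above can, for a single class, be written as one minus the soft overlap between prediction and ground truth. A minimal numpy sketch; the class-weighting scheme here is a generic illustration, not RetiFluidNet's exact formulation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - 2|P.G| / (|P| + |G|).

    pred: predicted probabilities in [0, 1], any shape
    target: binary ground-truth mask, same shape
    """
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def weighted_multiclass_dice(pred, target, weights):
    """Class-weighted mean of per-class soft Dice losses.
    pred, target: (n_classes, H, W); weights: (n_classes,)"""
    losses = [soft_dice_loss(pred[c], target[c]) for c in range(len(weights))]
    return float(np.average(losses, weights=weights))

# A perfect prediction gives zero loss.
mask = np.zeros((2, 8, 8))
mask[0, 2:6, 2:6] = 1
mask[1] = 1 - mask[0]
print(round(weighted_multiclass_dice(mask, mask, np.array([1.0, 1.0])), 6))  # 0.0
```

In the multi-scale DSL scheme described above, a loss of this form would additionally be evaluated on downsampled intermediate predictions and summed across the hierarchy.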